5 research outputs found

    Predicting range of acceptable photographic tonal adjustments

    No full text
    Thesis: S.M., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2015. Cataloged from PDF version of thesis. Includes bibliographical references (pages 57-58).

    There is often more than one way to select a tonal adjustment for a photograph, and different individuals may prefer different adjustments. However, selecting good adjustments is challenging. This thesis describes a method to predict whether a given tonal rendition is acceptable for a photograph, which we use to characterize its range of acceptable adjustments. We gathered a dataset of image "acceptability" over brightness and contrast adjustments. We find that unacceptable renditions can be explained in terms of over-exposure, under-exposure, and low contrast. Based on this observation, we propose a machine-learning algorithm to assess whether an adjusted photograph looks acceptable. We show that our algorithm can differentiate unsightly renditions from reasonable ones. Finally, we describe proof-of-concept applications that use our algorithm to guide the exploration of the possible tonal renditions of a photograph.

    By Ronnachai Jaroensri. S.M.
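    The abstract's key observation is that unacceptable renditions come down to over-exposure, under-exposure, or low contrast. A minimal Python sketch of that idea, sweeping brightness/contrast adjustments and flagging renditions with a heuristic stand-in for the learned predictor; the thresholds `clip_frac` and `min_std` are illustrative assumptions, not values from the thesis:

```python
import numpy as np

def adjust(img, brightness=0.0, contrast=1.0):
    # Simple brightness/contrast adjustment of a float image in [0, 1].
    return np.clip((img - 0.5) * contrast + 0.5 + brightness, 0.0, 1.0)

def looks_acceptable(img, clip_frac=0.05, min_std=0.08):
    # Heuristic stand-in for the learned acceptability predictor:
    # reject renditions with too many clipped pixels or too little contrast.
    over = np.mean(img >= 0.99)   # fraction of over-exposed pixels
    under = np.mean(img <= 0.01)  # fraction of under-exposed pixels
    return over < clip_frac and under < clip_frac and img.std() > min_std

def acceptable_range(img, brightnesses, contrasts):
    # Characterize which (brightness, contrast) pairs yield acceptable renditions.
    return [(b, c) for b in brightnesses for c in contrasts
            if looks_acceptable(adjust(img, b, c))]
```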

    Learning to solve problems in computer vision with synthetic data

    No full text
    Thesis: Ph.D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2019. Cataloged from PDF version of thesis. Includes bibliographical references (pages 121-130).

    Deep neural networks (DNNs) have become the tool of choice for many researchers due to their superior performance. However, for DNNs to reach their full potential, a large enough dataset must be available. This poses a severe limitation on the problems to which DNNs can be applied. Fortunately, many problems in computer vision have well-understood physical models and can be simulated readily. This thesis considers the use of synthetic data to allow DNNs to solve problems in computer vision. First, we consider using synthetic data for problems where collection of real data is not feasible. We focus on the problem of magnifying small motions in videos. Using synthetic data allows us to train DNN models that magnify motion with reduced artifacts and better noise handling compared to traditional signal-processing-based algorithms. Then, we discuss the importance of the realism of the generated data. We focus on realistic camera-pipeline simulation and use it to study blind denoising in real images. We show that our noise simulation based on a realistic camera pipeline significantly outperforms the simplified noise models commonly used in the literature. Finally, we show that synthetic data can also be useful for more general computer vision research. We use synthetic data to study the effect of label quality on the semantic segmentation task. Synthetic data provides us with datasets large enough to study the trade-off between the quality and quantity of the data. We find that prediction accuracy depends largely on the estimated time required for humans to annotate the data, and that fine-tuning after training on low-quality labels offers the best trade-off between annotation effort and accuracy.

    By Ronnachai Jaroensri. Ph.D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science.
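    The denoising chapter contrasts realistic camera-pipeline noise with the simplified models common in the literature. A hedged sketch of that contrast, using a signal-dependent Poisson-Gaussian model in linear space as a stand-in for the full pipeline simulation; the parameters `photons` and `read_sigma` are illustrative, not calibrated to the thesis's setup:

```python
import numpy as np

rng = np.random.default_rng(0)

def awgn(img, sigma=0.02):
    # Simplified model common in the literature: additive white Gaussian noise.
    return np.clip(img + rng.normal(0.0, sigma, img.shape), 0.0, 1.0)

def poisson_gaussian(img, photons=1000.0, read_sigma=0.002):
    # Signal-dependent shot noise plus Gaussian read noise, applied in
    # linear (RAW-like) intensity space before any tone mapping.
    shot = rng.poisson(img * photons) / photons
    return np.clip(shot + rng.normal(0.0, read_sigma, img.shape), 0.0, 1.0)
```

    The practical difference is that the Poisson term makes noise variance grow with signal level, which a flat-sigma Gaussian cannot capture.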

    On the Importance of Label Quality for Semantic Segmentation

    No full text
    Convolutional networks (ConvNets) have become the dominant approach to semantic image segmentation. Producing the accurate, pixel-level labels required for this task is a tedious and time-consuming process; however, producing approximate, coarse labels can take only a fraction of the time and effort. We investigate the relationship between the quality of labels and the performance of ConvNets for semantic segmentation. We create a very large synthetic dataset with perfectly labeled street-view scenes. From these perfect labels, we synthetically coarsen labels to different qualities and estimate the human-hours required to produce them. We perform a series of experiments, training ConvNets with varying numbers of training images and label qualities. We find that the performance of ConvNets mostly depends on the time spent creating the training labels. That is, a larger coarsely-annotated dataset can yield the same performance as a smaller finely-annotated one. Furthermore, fine-tuning a coarsely pre-trained ConvNet with a few finely-annotated labels can yield comparable or superior performance to training with a large amount of finely-annotated labels alone, at a fraction of the labeling cost. We demonstrate that our result also holds for different network architectures and various object classes in an urban scene.
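    The paper coarsens perfect labels to several quality levels; the exact degradation procedure is not reproduced here, so the following is only an illustrative block-quantization stand-in for it, with `factor` as an assumed coarseness knob:

```python
import numpy as np

def coarsen_labels(labels, factor=8):
    # Approximate coarse annotation by replacing each factor-by-factor
    # block of the perfect label map with its most frequent label,
    # losing fine object boundaries the way rough polygons would.
    h, w = labels.shape
    coarse = labels.copy()
    for y in range(0, h, factor):
        for x in range(0, w, factor):
            block = labels[y:y+factor, x:x+factor]
            vals, counts = np.unique(block, return_counts=True)
            coarse[y:y+factor, x:x+factor] = vals[np.argmax(counts)]
    return coarse
```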

    Predicting Range of Acceptable Photographic Tonal Adjustments

    No full text
    © 2015 IEEE. There is often more than one way to select a tonal adjustment for a photograph, and different individuals may prefer different adjustments. However, selecting good adjustments is challenging. This paper describes a method to predict whether a given tonal rendition is acceptable for a photograph, which we use to characterize its range of acceptable adjustments. We gathered a dataset of image "acceptability" over brightness and contrast adjustments. We find that unacceptable renditions can be explained in terms of over-exposure, under-exposure, and low contrast. Based on this observation, we propose a machine-learning algorithm to assess whether an adjusted photograph looks acceptable. We show that our algorithm can differentiate unsightly renditions from reasonable ones. Finally, we describe proof-of-concept applications that use our algorithm to guide the exploration of the possible tonal renditions of a photograph.
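    The proof-of-concept applications use the predictor to guide exploration of tonal renditions. A small sketch of one such use, bisecting for the upper end of the acceptable brightness range; `predictor` and `adjust` are assumed callables (for instance, the heuristics sketched after the first entry above), and the monotonicity of acceptability in the offset is an assumed simplification:

```python
def brightness_bound(img, predictor, adjust, hi=1.0, tol=1e-2):
    # Bisect for the largest brightness offset in [0, hi] that the
    # predictor still deems acceptable, assuming acceptability is
    # monotone in the offset.
    assert predictor(img), "base rendition should itself be acceptable"
    a, b = 0.0, hi
    while b - a > tol:
        mid = 0.5 * (a + b)
        if predictor(adjust(img, brightness=mid)):
            a = mid
        else:
            b = mid
    return a  # largest acceptable positive offset found
```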